    Grid Cells, Place Cells, and Geodesic Generalization for Spatial Reinforcement Learning

    Reinforcement learning (RL) provides an influential characterization of the brain's mechanisms for learning to make advantageous choices. An important problem, though, is how complex tasks can be represented in a way that enables efficient learning. We consider this problem through the lens of spatial navigation, examining how two of the brain's location representations—hippocampal place cells and entorhinal grid cells—are adapted to serve as basis functions for approximating value over space for RL. Although much previous work has focused on these systems' roles in combining upstream sensory cues to track location, revisiting these representations with a focus on how they support this downstream decision function offers complementary insights into their characteristics. Rather than localization, the key problem in learning is generalization between past and present situations, which may not match perfectly. Accordingly, although neural populations collectively offer a precise representation of position, our simulations of navigational tasks verify the suggestion that RL gains efficiency from the more diffuse tuning of individual neurons, which allows learning about rewards to generalize over longer distances given fewer training experiences. However, work on generalization in RL suggests the underlying representation should respect the environment's layout. In particular, although it is often assumed that neurons track location in Euclidean coordinates (that a place cell's activity declines “as the crow flies” away from its peak), the relevant metric for value is geodesic: the distance along a path, around any obstacles. We formalize this intuition and present simulations showing how Euclidean, but not geodesic, representations can interfere with RL by generalizing inappropriately across barriers. Our proposal that place and grid responses should be modulated by geodesic distances suggests novel predictions about how obstacles should affect spatial firing fields, which provides a new viewpoint on data concerning both spatial codes and reward learning.
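
    As an illustration of the geodesic point, the following sketch (not taken from the paper; the arena size, barrier layout and Gaussian tuning width are assumed) compares a place-cell-like field tuned by Euclidean distance with one tuned by geodesic distance in a small gridworld containing a barrier. Only the Euclidean field generalizes across the wall, which is exactly the inappropriate generalization described above.

    # Sketch: place-cell tuning by Euclidean vs. geodesic distance in a gridworld
    # with a barrier. Grid size, barrier, field centre and tuning width sigma are
    # illustrative choices, not parameters from the paper.
    import numpy as np
    from collections import deque

    H, W, sigma = 7, 7, 1.5
    open_cell = np.ones((H, W), dtype=bool)
    open_cell[1:6, 3] = False          # vertical barrier through rows 1-5 of column 3

    def geodesic_distances(src):
        """Breadth-first-search distances along open cells, i.e. around the barrier."""
        dist = np.full((H, W), np.inf)
        dist[src] = 0.0
        queue = deque([src])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < H and 0 <= nc < W and open_cell[nr, nc] and dist[nr, nc] == np.inf:
                    dist[nr, nc] = dist[r, c] + 1
                    queue.append((nr, nc))
        return dist

    centre = (3, 1)                    # place-field centre, to the left of the barrier
    rows, cols = np.indices((H, W))
    d_euc = np.hypot(rows - centre[0], cols - centre[1])
    d_geo = geodesic_distances(centre)

    rate_euc = np.exp(-d_euc ** 2 / (2 * sigma ** 2)) * open_cell
    rate_geo = np.exp(-d_geo ** 2 / (2 * sigma ** 2)) * open_cell

    # Cell (3, 5) is close to the field centre "as the crow flies" but far along any
    # path around the barrier: the Euclidean field is still active there, while the
    # geodesic field is essentially silent.
    print(rate_euc[3, 5], rate_geo[3, 5])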

    Recognition without identification, erroneous familiarity, and déjà vu

    Déjà vu is characterized by the recognition of a situation concurrent with the awareness that this recognition is inappropriate. Although forms of déjà vu resolve in favor of the inappropriate recognition and therefore have behavioral consequences, typical déjà vu experiences resolve in favor of the awareness that the sensation of recognition is inappropriate. The resultant lack of behavioral modification associated with typical déjà vu means that clinicians and experimenters rely heavily on self-report when observing the experience. In this review, we focus on recent déjà vu research. We consider issues facing neuropsychological, neuroscientific, and cognitive experimental frameworks attempting to explore and experimentally generate the experience. In doing this, we suggest the need for more experimentation and a more cautious interpretation of research findings, particularly as many techniques being used to explore déjà vu are in the early stages of development.

    The role of ongoing dendritic oscillations in single-neuron dynamics

    The dendritic tree contributes significantly to the elementary computations a neuron performs while converting its synaptic inputs into action potential output. Traditionally, these computations have been characterized as temporally local, near-instantaneous mappings from the current input of the cell to its current output, brought about by somatic summation of dendritic contributions that are generated in spatially localized functional compartments. However, recent evidence about the presence of oscillations in dendrites suggests a qualitatively different mode of operation: the instantaneous phase of such oscillations can depend on a long history of inputs, and under appropriate conditions, even dendritic oscillators that are remote may interact through synchronization. Here, we develop a mathematical framework to analyze the interactions of local dendritic oscillations, and the way these interactions influence single cell computations. Combining weakly coupled oscillator methods with cable theoretic arguments, we derive phase-locking states for multiple oscillating dendritic compartments. We characterize how the phase-locking properties depend on key parameters of the oscillating dendrite: the electrotonic properties of the (active) dendritic segment, and the intrinsic properties of the dendritic oscillators. As a direct consequence, we show how input to the dendrites can modulate phase-locking behavior and hence global dendritic coherence. In turn, dendritic coherence is able to gate the integration and propagation of synaptic signals to the soma, ultimately leading to an effective control of somatic spike generation. Our results suggest that dendritic oscillations enable the dendritic tree to operate on more global temporal and spatial scales than previously thought.
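
    A minimal sketch of the weak-coupling picture that this framework builds on: two abstract phase oscillators with symmetric sine coupling settle into a phase-locked state whose offset depends on their frequency mismatch. The frequencies and coupling strength are invented for illustration, and the sketch deliberately omits the cable-theoretic detail of the paper.

    # Sketch: two weakly coupled phase oscillators reaching a phase-locked state.
    # Frequencies (about 8 Hz) and coupling strength are illustrative assumptions.
    import numpy as np

    omega = np.array([2 * np.pi * 8.0, 2 * np.pi * 8.3])   # intrinsic frequencies (rad/s)
    k = 4.0                                                 # coupling strength (1/s)
    theta = np.array([0.0, 2.5])                            # initial phases (rad)
    dt, T = 1e-3, 10.0                                      # time step and duration (s)

    for _ in range(int(T / dt)):
        d01 = np.sin(theta[1] - theta[0])
        theta = theta + dt * (omega + k * np.array([d01, -d01]))

    # At the locked state the phase difference satisfies sin(dphi) = (omega2 - omega1) / (2k).
    dphi = (theta[1] - theta[0]) % (2 * np.pi)
    print(dphi, np.arcsin((omega[1] - omega[0]) / (2 * k)))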

    An analysis of waves underlying grid cell firing in the medial entorhinal cortex

    Layer II stellate cells in the medial entorhinal cortex (MEC) express hyperpolarisation-activated cyclic-nucleotide-gated (HCN) channels that allow for rebound spiking via an I_h current in response to hyperpolarising synaptic input. A computational modelling study by Hasselmo [2013 Neuronal rebound spiking, resonance frequency and theta cycle skipping may contribute to grid cell firing in medial entorhinal cortex. Phil. Trans. R. Soc. B 369: 20120523] showed that an inhibitory network of such cells can support periodic travelling waves with a period that is controlled by the dynamics of the I_h current. Hasselmo has suggested that these waves can underlie the generation of grid cells, and that the known difference in I_h resonance frequency along the dorsal to ventral axis can explain the observed size and spacing between grid cell firing fields. Here we develop a biophysical spiking model within a framework that allows for analytical tractability. We combine the simplicity of integrate-and-fire neurons with a piecewise linear caricature of the gating dynamics for HCN channels to develop a spiking neural field model of MEC. Using techniques primarily drawn from the field of nonsmooth dynamical systems, we show how to construct periodic travelling waves, and in particular the dispersion curve that determines how wave speed varies as a function of period. This exhibits a wide range of long wavelength solutions, reinforcing the idea that rebound spiking is a candidate mechanism for generating grid cell firing patterns. Importantly, we develop a wave stability analysis to show how the maximum allowed period is controlled by the dynamical properties of the I_h current. Our theoretical work is validated by numerical simulations of the spiking model in both one and two dimensions.
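
    A hedged sketch of the rebound mechanism referred to above: a leaky integrate-and-fire unit with a piecewise-linear caricature of an h-type gate that charges up during a hyperpolarizing pulse and then depolarizes the cell past threshold when the pulse ends. All parameters are assumptions chosen for illustration; this is not the paper's spiking neural field model.

    # Sketch: integrate-and-fire unit with a piecewise-linear h-gate producing
    # rebound spiking after a hyperpolarizing pulse. Units are ms and mV; all
    # values are illustrative.
    dt, T = 0.1, 400.0                        # time step and duration (ms)
    tau_v, tau_h = 10.0, 50.0                 # membrane and h-gate time constants (ms)
    v_rest, v_th, v_reset = -65.0, -50.0, -65.0
    g_h, E_h = 3.0, -20.0                     # h conductance (relative to leak) and reversal (mV)

    v, h, spikes = v_rest, 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        I_inh = -40.0 if 100.0 <= t < 300.0 else 0.0      # hyperpolarizing pulse
        h_inf = min(max((-65.0 - v) / 15.0, 0.0), 1.0)    # gate activates below -65 mV
        h += dt * (h_inf - h) / tau_h
        v += dt * (-(v - v_rest) + g_h * h * (E_h - v) + I_inh) / tau_v
        if v >= v_th:
            spikes.append(round(t, 1))
            v = v_reset

    print(spikes)   # at least one rebound spike shortly after the pulse ends at t = 300 ms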

    Neural models that convince: Model hierarchies and other strategies to bridge the gap between behavior and the brain.

    Computational modeling of the brain holds great promise as a bridge from brain to behavior. To fulfill this promise, however, it is not enough for models to be 'biologically plausible': models must be structurally accurate. Here, we analyze what this entails for so-called psychobiological models, models that address behavior as well as brain function in some detail. Structural accuracy may be supported by (1) a model's a priori plausibility, which comes from a reliance on evidence-based assumptions, (2) fitting existing data, and (3) the derivation of new predictions. All three sources of support require modelers to be explicit about the ontology of the model, and require the existence of data constraining the modeling. For situations in which such data are only sparsely available, we suggest a new approach. If several models are constructed that together form a hierarchy of models, higher-level models can be constrained by lower-level models, and low-level models can be constrained by behavioral features of the higher-level models. Modeling the same substrate at different levels of representation, as proposed here, thus has benefits that exceed the merits of each model in the hierarchy on its own.

    A Computational Model of the Development of Separate Representations of Facial Identity and Expression in the Primate Visual System

    Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
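
    As a rough illustration of the building block named above, the sketch below trains a single small self-organising map on toy four-dimensional "identity/expression" vectors. The stimuli, map size and learning schedule are invented and stand in for the model's four-layer hierarchy trained on cartoon faces.

    # Sketch: one small SOM trained on toy stimuli in which two features carry
    # "identity" and two carry "expression". Everything here is illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, map_h, map_w = 4, 8, 8
    weights = rng.random((map_h, map_w, n_inputs))       # one weight vector per map unit

    def make_stimulus():
        identity = rng.integers(0, 2)
        expression = rng.integers(0, 2)
        x = np.array([identity, 1 - identity, expression, 1 - expression], dtype=float)
        return x + 0.05 * rng.standard_normal(n_inputs)  # small sensory noise

    grid_r, grid_c = np.indices((map_h, map_w))
    n_steps = 2000
    for step in range(n_steps):
        lr = 0.5 * (1.0 - step / n_steps)                # decaying learning rate
        radius = 3.0 * (1.0 - step / n_steps) + 0.5      # shrinking neighbourhood
        x = make_stimulus()
        dists = np.linalg.norm(weights - x, axis=2)      # match of x to every unit
        br, bc = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
        neigh = np.exp(-((grid_r - br) ** 2 + (grid_c - bc) ** 2) / (2 * radius ** 2))
        weights += lr * neigh[..., None] * (x - weights) # pull the neighbourhood toward x

    # Inspect the preferred inputs of two distant map units after training.
    print(np.round(weights[0, 0], 2), np.round(weights[map_h - 1, map_w - 1], 2))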

    Evaluation of the Oscillatory Interference Model of Grid Cell Firing through Analysis and Measured Period Variance of Some Biological Oscillators

    Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators are expected to maintain a stable grid for approximately t = 5µ³/(4πσ)² seconds, where µ is the mean period of an oscillator in seconds and σ² its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit level effects which might reduce oscillator variability. Further implications for grid cell models are discussed.
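
    The stated criterion can be evaluated directly. The sketch below plugs an illustrative mean period and variance into t = 5µ³/(4πσ)²; the example numbers are assumptions, not the recorded values analysed in the paper.

    # Sketch: expected stability time of a pair of oscillators under the stated
    # criterion. The example period statistics are illustrative assumptions.
    import math

    def expected_stability_time(mu, sigma_sq):
        """t = 5*mu**3 / (4*pi*sigma)**2, with mu in s and sigma_sq in s**2."""
        sigma = math.sqrt(sigma_sq)
        return 5.0 * mu ** 3 / (4.0 * math.pi * sigma) ** 2

    # Example: a theta-band oscillator with a 125 ms mean period and 1 ms standard deviation.
    print(expected_stability_time(mu=0.125, sigma_sq=0.001 ** 2))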

    Imbalanced pattern completion vs. separation in cognitive disease: network simulations of synaptic pathologies predict a personalized therapeutics strategy

    Background: Diverse mouse genetic models of neurodevelopmental, neuropsychiatric, and neurodegenerative causes of impaired cognition exhibit at least four convergent points of synaptic malfunction: 1) Strength of long-term potentiation (LTP), 2) Strength of long-term depression (LTD), 3) Relative inhibition levels (Inhibition), and 4) Excitatory connectivity levels (Connectivity). Results: To test the hypothesis that pathological increases or decreases in these synaptic properties could underlie imbalances at the level of basic neural network function, we explored each type of malfunction in a simulation of autoassociative memory. These network simulations revealed that one impact of impairments or excesses in each of these synaptic properties is to shift the trade-off between pattern separation and pattern completion performance during memory storage and recall. Each type of synaptic pathology either pushed the network balance towards intolerable error in pattern separation or intolerable error in pattern completion. Imbalances caused by pathological impairments or excesses in LTP, LTD, inhibition, or connectivity could all be exacerbated, or rescued, by the simultaneous modulation of any of the other three synaptic properties. Conclusions: Because appropriate modulation of any of the synaptic properties could help re-balance network function, regardless of the origins of the imbalance, we propose a new strategy of personalized cognitive therapeutics guided by assay of pattern completion vs. pattern separation function. Simulated examples and testable predictions of this theorized approach to cognitive therapeutics are presented.
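
    To make the completion vs. separation trade-off concrete, the sketch below runs a generic Hopfield-style autoassociative network, with a global activation threshold standing in loosely for the inhibition level. The network size, stored patterns and threshold values are illustrative and are not the simulations reported in the paper.

    # Sketch: autoassociative recall of a stored pattern from a degraded cue.
    # Raising the global threshold (a crude inhibition proxy) blocks completion.
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_patterns = 200, 5
    patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n))
    W = (patterns.T @ patterns) / n                      # Hebbian outer-product weights
    np.fill_diagonal(W, 0.0)

    def recall(cue, theta, steps=20):
        """Synchronous threshold updates; theta pushes all units toward silence."""
        s = cue.copy()
        for _ in range(steps):
            s = np.where(W @ s - theta > 0, 1.0, -1.0)
        return s

    cue = patterns[0].copy()
    flip = rng.choice(n, size=n // 5, replace=False)     # degrade the cue: flip 20% of bits
    cue[flip] *= -1.0

    # With a low threshold the degraded cue is completed to the stored pattern
    # (overlap close to 1); with a high threshold completion largely fails.
    for theta in (0.0, 0.8):
        out = recall(cue, theta)
        overlap = float(out @ patterns[0]) / n           # 1.0 means perfect completion
        print(f"theta={theta}: overlap with stored pattern = {overlap:.2f}")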

    The Influence of Markov Decision Process Structure on the Possible Strategic Use of Working Memory and Episodic Memory

    Researchers use a variety of behavioral tasks to analyze the effect of biological manipulations on memory function. This research will benefit from a systematic mathematical method for analyzing memory demands in behavioral tasks. In the framework of reinforcement learning theory, these tasks can be mathematically described as partially observable Markov decision processes. While a wealth of evidence collected over the past 15 years relates the basal ganglia to the reinforcement learning framework, only recently has much attention been paid to including psychological concepts such as working memory or episodic memory in these models. This paper presents an analysis that provides a quantitative description of memory states sufficient for correct choices at specific decision points. Using information from the mathematical structure of the task descriptions, we derive measures that indicate whether working memory (for one or more cues) or episodic memory can provide strategically useful information to an agent. In particular, the analysis determines which observed states must be maintained in or retrieved from memory to perform these specific tasks. We demonstrate the analysis on three simplified tasks as well as eight more complex memory tasks drawn from the animal and human literature (two alternation tasks, two sequence disambiguation tasks, two non-matching tasks, the 2-back task, and the 1-2-AX task). The results of these analyses agree with results from quantitative simulations of the task reported in previous publications and provide simple indications of the memory demands of the tasks, which can require far less computation than a full simulation of the task. This may provide a basis for a quantitative behavioral stoichiometry of memory tasks.
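
    The sketch below illustrates the general idea on a toy T-maze alternation task: if a single observed state demands more than one correct action across trials, the current observation alone is insufficient and memory of an earlier observation becomes strategically useful. The task encoding and the check are illustrative only and are not the paper's formal measures.

    # Sketch: detecting when the current observation alone cannot determine the
    # correct action in a toy alternation task (all encodings are illustrative).
    from collections import defaultdict

    # (previously visited arm, current observation, correct action); at the choice
    # point the agent must enter the arm it did NOT visit on the previous trial.
    episodes = [
        ("left",  "choice_point", "go_right"),
        ("right", "choice_point", "go_left"),
        ("left",  "reward_arm",   "return_to_stem"),
        ("right", "reward_arm",   "return_to_stem"),
    ]

    def required_actions(key_fn):
        """Map each key (the information assumed available) to its set of correct actions."""
        table = defaultdict(set)
        for prev, obs, action in episodes:
            table[key_fn(prev, obs)].add(action)
        return table

    # Current observation only: 'choice_point' demands two different actions, so the
    # task is partially observable and some form of memory is strategically useful.
    obs_only = required_actions(lambda prev, obs: (obs,))
    print({k: v for k, v in obs_only.items() if len(v) > 1})

    # Adding memory of the previously visited arm removes the ambiguity (prints {}).
    obs_plus_memory = required_actions(lambda prev, obs: (prev, obs))
    print({k: v for k, v in obs_plus_memory.items() if len(v) > 1})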

    In vivo functional neurochemistry of human cortical cholinergic function during visuospatial attention

    Cortical acetylcholine is involved in key cognitive processes such as visuospatial attention. Dysfunction in the cholinergic system has been described in a number of neuropsychiatric disorders. Levels of brain acetylcholine can be pharmacologically manipulated, but it is not possible to directly measure it in vivo in humans. However, key parts of its biochemical cascade in neural tissue, such as choline, can be measured using magnetic resonance spectroscopy (MRS). There is evidence that levels of choline may be an indirect but proportional measure of acetylcholine availability in brain tissue. In this study, we measured relative choline levels in the parietal cortex using functional (event-related) MRS (fMRS) during performance of a visuospatial attention task, with a modelling approach verified using simulated data. We describe a task-driven interaction effect on choline concentration, specifically driven by contralateral attention shifts. Our results suggest that choline MRS has the potential to serve as a proxy of brain acetylcholine function in humans.
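
    As a loose illustration of the kind of interaction analysis described, and not the study's actual fMRS model, the sketch below simulates a relative choline signal containing a shift-by-task interaction and recovers it with an ordinary least-squares fit. The design, effect size and noise level are invented.

    # Sketch: recovering a simulated interaction effect with ordinary least squares.
    # All regressors and effect sizes below are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    shift_contra = rng.integers(0, 2, n).astype(float)   # 1 = contralateral attention shift
    task_on = rng.integers(0, 2, n).astype(float)        # 1 = task block, 0 = rest
    interaction = shift_contra * task_on

    # Simulated relative choline signal: baseline plus a pure interaction effect plus noise.
    choline = 1.0 + 0.08 * interaction + 0.02 * rng.standard_normal(n)

    X = np.column_stack([np.ones(n), shift_contra, task_on, interaction])
    beta, *_ = np.linalg.lstsq(X, choline, rcond=None)
    print(dict(zip(["intercept", "shift", "task", "shift_x_task"], np.round(beta, 3))))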